Learning the kernel matrix via predictive low-rank approximations

Authors

  • Martin Strazar
  • Tomaz Curk
Abstract

Efficient and accurate low-rank approximations to multiple data sources are essential in the era of big data. The scaling of kernel-based learning algorithms to large datasets is limited by the O(n²) complexity associated with computing and storing the kernel matrix, which most recent multiple kernel learning algorithms assume to be available. We propose a method to learn simultaneous low-rank approximations of a set of base kernels in regression tasks. We present the Mklaren algorithm, which approximates multiple kernel matrices with least angle regression in the low-dimensional feature space. The idea is based on geometrical concepts and does not assume access to full kernel matrices. The algorithm achieves linear complexity in both the number of data points and the number of kernels, while accounting for correlations between kernel matrices. When an explicit feature space representation is available for the kernels, we use the relation between primal and dual regression weights to gain model interpretability. Our approach outperforms contemporary kernel matrix approximation methods when learning with multiple kernels on standard regression datasets, and improves the selection of relevant kernels compared with multiple kernel learning methods.
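As a rough illustration of the idea in the abstract, the sketch below greedily grows a shared low-rank feature space across several base kernels, at each step picking the (kernel, pivot) pair whose orthogonalized kernel column correlates most with the current regression residual. This is a simplified, assumption-laden sketch, not the authors' Mklaren algorithm: it materializes full Gram matrices (which Mklaren explicitly avoids) and uses plain greedy selection rather than least angle regression; all names and parameters are illustrative.

import numpy as np

def rbf_kernel(X, gamma):
    # Full RBF Gram matrix; the real method would avoid forming this.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def greedy_mkl_approx(Ks, y, rank):
    """Select `rank` pivot columns across the kernels in Ks, guided by
    correlation with the residual of y (brute-force, O(n^2) per step)."""
    n = len(y)
    yc = y - y.mean()              # centered targets
    F = np.zeros((n, 0))           # orthonormal basis of selected columns
    residual = yc.copy()
    selected = []                  # (kernel index, pivot index) pairs
    for _ in range(rank):
        best = None
        for q, K in enumerate(Ks):
            for i in range(n):
                if (q, i) in selected:
                    continue
                # orthogonalize the candidate column against chosen features
                col = K[:, i] - F @ (F.T @ K[:, i])
                norm = np.linalg.norm(col)
                if norm < 1e-10:
                    continue
                score = abs(col @ residual) / norm
                if best is None or score > best[0]:
                    best = (score, q, i, col / norm)
        if best is None:
            break
        _, q, i, f = best
        selected.append((q, i))
        F = np.hstack([F, f[:, None]])
        residual = yc - F @ (F.T @ yc)   # residual after least-squares refit
    return F, selected

# Toy usage: two RBF kernels with different widths.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
Ks = [rbf_kernel(X, g) for g in (0.1, 1.0)]
F, selected = greedy_mkl_approx(Ks, y, rank=10)
print("pivots per kernel:", np.bincount([q for q, _ in selected], minlength=2))

On such toy data the pivots should tend to concentrate on the kernel whose bandwidth matches the target function, loosely mirroring the kernel-selection behaviour the abstract describes.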

Related articles

Sharp analysis of low-rank kernel matrix approximations

We consider supervised learning problems within the positive-definite kernel framework, such as kernel ridge regression, kernel logistic regression or the support vector machine. With kernels leading to infinite-dimensional feature spaces, a common practical limiting difficulty is the necessity of computing the kernel matrix, which most frequently leads to algorithms with running time at least ...
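For context on why the kernel matrix is the bottleneck, here is a bare-bones dual kernel ridge regression (a generic textbook sketch, not code from the paper; kernel and parameters are arbitrary choices): forming the n×n Gram matrix already costs O(n²) memory, and the linear solve O(n³) time.

import numpy as np

def krr_fit_predict(X, y, X_test, gamma=1.0, lam=1e-2):
    def gram(A, B):
        d2 = (np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :]
              - 2 * A @ B.T)
        return np.exp(-gamma * d2)
    K = gram(X, X)                                  # O(n^2) storage
    n = len(y)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)   # O(n^3) solve
    return gram(X_test, X) @ alpha

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 2))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=300)
print(krr_fit_predict(X, y, X[:5]))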

On the Impact of Kernel Approximation on Learning Accuracy

Kernel approximation is commonly used to scale kernel-based algorithms to applications containing as many as several million instances. This paper analyzes the effect of such approximations in the kernel matrix on the hypothesis generated by several widely used learning algorithms. We give stability bounds based on the norm of the kernel approximation for these algorithms, including SVMs, KRR, ...
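A tiny numeric illustration of the stability question this paper studies (my own toy, not the paper's bounds or experiments): in kernel ridge regression the dual solution is α = (K + λnI)⁻¹y, so replacing K with an approximation K′ shifts α by an amount controlled by ‖K − K′‖.

import numpy as np

rng = np.random.default_rng(1)
n, lam = 100, 0.1
X = rng.normal(size=(n, 2))
y = X[:, 0] ** 2 + 0.05 * rng.normal(size=n)

sq = np.sum(X**2, axis=1)
K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T))   # RBF, gamma = 1

E = rng.normal(scale=1e-3, size=(n, n))
K_approx = K + (E + E.T) / 2                  # small symmetric perturbation

alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
alpha_approx = np.linalg.solve(K_approx + lam * n * np.eye(n), y)

print("||K - K'||_2       =", np.linalg.norm(K - K_approx, 2))
print("||alpha - alpha'|| =", np.linalg.norm(alpha - alpha_approx))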

Asymptotic error bounds for kernel-based Nyström low-rank approximation matrices

Many kernel-based learning algorithms have the computational load scaled with the sample size n due to the column size of a full kernel Gram matrix K. This article considers the Nyström low-rank approximation. It uses a reduced kernel K̂, which is n×m, consisting of m columns (say columns i1, i2, …, im) randomly drawn from K. This approximation takes the form K ≈ K̂UK̂ᵀ, where U is the reduced ...
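A minimal numpy rendering of the approximation described above, with U taken as the pseudo-inverse of the m×m block of K at the sampled indices (a standard choice; data and parameters here are illustrative):

import numpy as np

rng = np.random.default_rng(2)
n, m = 500, 50
X = rng.normal(size=(n, 4))
sq = np.sum(X**2, axis=1)
K = np.exp(-0.5 * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

idx = rng.choice(n, size=m, replace=False)    # columns i1, ..., im
K_hat = K[:, idx]                             # n x m reduced kernel
W = K_hat[idx, :]                             # m x m block K[idx][:, idx]
K_nys = K_hat @ np.linalg.pinv(W) @ K_hat.T   # K ≈ K_hat U K_hat^T

print("relative Frobenius error:",
      np.linalg.norm(K - K_nys) / np.linalg.norm(K))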

Asymptotic error bounds for kernel-based Nyström low-rank approximation matrices

  • Many kernel-based learning algorithms have a computational load that scales with the sample size n.
  • The Nyström low-rank approximation is designed to reduce this computation.
  • We propose the spectrum decomposition condition with a theoretical justification.
  • Asymptotic error bounds on eigenvalues and eigenvectors are derived.
  • Numerical experiments are provided for the covariance kernel and Wishart matrix.

AMS subject cla...

Learning low-rank output kernels

Output kernel learning techniques make it possible to simultaneously learn a vector-valued function and a positive semidefinite matrix that describes the relationships between the outputs. In this paper, we introduce a new formulation that imposes a low-rank constraint on the output kernel and operates directly on a factor of the kernel matrix. First, we investigate the connection between output kernel l...
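To make the "factor of the kernel matrix" idea concrete, here is a toy sketch (an assumption-laden illustration, not the paper's formulation or algorithm): parametrize the PSD output kernel as L = BBᵀ with B of rank r; writing the predictions as Ŷ = KCL and substituting A = CB gives the model Ŷ = KABᵀ, which naive alternating least squares can fit while L stays PSD and low-rank by construction.

import numpy as np

rng = np.random.default_rng(3)
n, T, r = 150, 6, 2                  # samples, outputs, output-kernel rank
X = rng.normal(size=(n, 3))
sq = np.sum(X**2, axis=1)
K = np.exp(-0.5 * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

# Synthetic targets generated from the low-rank model itself.
B_true = rng.normal(size=(T, r))
Y = K @ rng.normal(size=(n, r)) @ B_true.T + 0.01 * rng.normal(size=(n, T))

A = rng.normal(size=(n, r))
B = rng.normal(size=(T, r))
for _ in range(50):                  # naive alternating least squares
    B = np.linalg.lstsq(K @ A, Y, rcond=None)[0].T          # fix A, solve B
    A = np.linalg.lstsq(K, Y @ B @ np.linalg.pinv(B.T @ B),
                        rcond=None)[0]                      # fix B, solve A

L = B @ B.T                          # learned output kernel: PSD, rank <= r
print("fit error:", np.linalg.norm(Y - K @ A @ B.T) / np.linalg.norm(Y))
print("rank of L:", np.linalg.matrix_rank(L))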

Journal:
  • CoRR

Volume: abs/1601.04366   Issue: –

Pages: –

Publication date: 2016